perm filename CH2[206,JMC] blob
sn#070502 filedate 1973-11-02 generic text, type C, neo UTF8
CHAPTER II
HOW TO WRITE RECURSIVE FUNCTION DEFINITIONS
1. Static and dynamic ways of programming.
In order to write recursive function definitions, one must
think about programming differently than is customary when writing
programs in languages like Fortran or Algol or in machine language.
In these languages, one has in mind the state of the computation as
represented by the values of certain variables or locations in the
memory of the machine, and then one writes statements or machine
instructions in order to make the state change in an appropriate way.
When writing LISP recursive functions one thinks differently.
Namely, one thinks about the value of the function, asks for what
values of the arguments the value of the function is immediate, and,
given arbitrary values of the arguments, for what simpler
arguments must the function be known in order to give the value of
the function for the given arguments. Let us take a numerical
example; namely, suppose we want to compute the function n!. For
what argument is the value of the function immediate?  Clearly, for
n = 0 or n = 1, the value is immediately seen to be 1. Moreover,
we can get the value for an arbitrary n if we know the value for
n-1. Also, we see that knowing the value for n = 1 is redundant,
since it can be obtained from the n = 0 case by the same rule as
gets it for a general n from the value for n-1. All this talk
leads to the simple recursive formula:
n! ← if n = 0 then 1 else n(n-1)!.
We may regard this as a static way of looking at programming.
We ask what simpler cases the general case of our function depends on
rather than how we build up the desired state of the computation.
One often is led to believe that static = bad and dynamic = good,
but in this case, the static way is often better than the dynamic
way. LISP offers both, but since it is less usual, we shall
concentrate on the static way for now.
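The recursive formula for n! transcribes directly into a modern language; here is a sketch in Python (the function name factorial is ours, not part of the text):

```python
def factorial(n):
    # n! <- if n = 0 then 1 else n(n-1)!
    if n == 0:
        return 1
    return n * factorial(n - 1)
```

The program follows the static definition exactly: we say what the value is for the immediate case and how the general case reduces to a simpler one.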
Compare the above recursive definition with the following
obvious Algol program for computing n!:
integer procedure factorial(n); value n; integer n;
begin integer s;
s := 1;
loop: if n = 0 then go to done;
s := n*s;
n := n-1;
go to loop;
done: factorial := s;
end;
The LISP program is shorter and clearer in this particularly
favorable case. Actually, when we discuss the mechanism of
recursion, it will turn out that the LISP program will be inefficient
in using the pushdown mechanism unnecessarily and should be replaced
by the following somewhat longer program that corresponds to the
above Algol program rather precisely:
n! ← fact(n,1),
where
fact(n,s) ← if n = 0 then s else fact(n-1,n*s).
In fact, compilers should produce the same object code from the two
programs.
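The accumulator version also transcribes directly; a sketch in Python (note that Python does not eliminate tail calls, so this sketch does not actually gain the efficiency of the Algol program, but it shows the correspondence):

```python
def fact(n, s):
    # fact(n,s) <- if n = 0 then s else fact(n-1, n*s)
    if n == 0:
        return s
    return fact(n - 1, n * s)

def factorial(n):
    # n! <- fact(n,1)
    return fact(n, 1)
```

The extra argument s plays the same role as the variable s in the Algol program: it carries the state of the computation.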
2. Simple list recursion.
About the simplest form of recursion in LISP occurs when one
of the arguments is a list, the result is immediate when the argument
is null, and otherwise we need only know the result for the d-part of
that argument. Consider, for example, u*v, the concatenation of the
lists u and v. The result is immediate for the case n u and
otherwise depends on the result for d u. Thus, we have
u*v ← if n u then v else a u . [d u * v].
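As a check on the definition, here is a sketch in Python, using Python lists in place of LISP lists, with u[0] standing for a u and u[1:] for d u (this representation is our own, chosen only for illustration):

```python
def append(u, v):
    # u*v <- if n u then v else a u . [d u * v]
    if not u:                          # n u
        return v
    return [u[0]] + append(u[1:], v)   # a u . [d u * v]
```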
On the other hand, if we had tried to recur on v rather than on u,
we would not have been successful. The result would be immediate for
n v, but u*v cannot be constructed in any direct way from u*d v
without a function that puts an element onto the end of a list.
(From a strictly list point of view, such a function would be as
elementary as cons which puts an element onto the front of a list,
but, when we consider the implementation of lists by list structures,
we see that the function is not so elementary. This has led some
people to construct systems in which lists are bi-directional, but,
in the main, this has turned out to be a bad idea). Anyway, it is
usually easier to recur on one argument of a function than to recur
on the other.
It is often necessary to represent a correspondence between
the elements of a small set of atoms and certain S-expressions by a
list structure. This is conveniently done by means of an association
list which is a list of pairs, each pair consisting of an atom and
the corresponding S-expression. Thus the association list
((X.(PLUS A B)) (Y.C) (Z.(TIMES U V))),
which would print as
((X PLUS A B) (Y . C) (Z TIMES U V)),
pairs X with (PLUS A B), Y with C, etc. We need a function to
tell whether anything is associated with the atom x in the
association list a, and, if so, to tell us what. We have
assoc[x,a] ← if n a then NIL else if x eq aa a then a a
else assoc[x,d a].
Its value is NIL if nothing is associated with x and the
association pair otherwise. E.g. assoc[X,((X.W) (Y.V))] = (X.W).
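A sketch of assoc in Python, again with Python lists for the association list and 2-tuples for the pairs (an assumption of ours, not the text's representation):

```python
def assoc(x, a):
    # assoc[x,a] <- if n a then NIL
    #               else if x eq aa a then a a
    #               else assoc[x,d a]
    if not a:                  # n a
        return None            # NIL
    if a[0][0] == x:           # x eq aa a
        return a[0]            # a a
    return assoc(x, a[1:])     # recur on d a
```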
It commonly happens that a function has no simple recursion,
but there is a simple recursion for a function with one more variable
that reduces to the desired function when the extra variable is set
to NIL. Thus
reverse[u] ← rev1[u,NIL],
where
rev1[u,v] ← if n u then v else rev1[d u,a u . v].
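The same device, sketched in Python with Python lists standing in for LISP lists:

```python
def rev1(u, v):
    # rev1[u,v] <- if n u then v else rev1[d u, a u . v]
    if not u:
        return v
    return rev1(u[1:], [u[0]] + v)

def reverse(u):
    # reverse[u] <- rev1[u, NIL]
    return rev1(u, [])
```

The extra variable v accumulates the reversed list, and reduces to the answer when u is exhausted.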
reverse has a direct recursive definition as discovered by S. Ness,
but no-one would want to use the following in actual computation nor
does it generate much understanding, only appreciation of Mr. Ness's
ingenuity:
reverse[u] ← if n u ∨ n d u then u else
a reverse[d u] . reverse[a u . reverse[d reverse[d u]]].
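Purely as a check that the definition works, it can be transcribed and run; a sketch in Python with Python lists standing in for LISP lists:

```python
def reverse(u):
    # reverse[u] <- if n u or n d u then u
    #   else a reverse[d u] . reverse[a u . reverse[d reverse[d u]]]
    if not u or not u[1:]:        # n u or n d u
        return u
    return ([reverse(u[1:])[0]]   # a reverse[d u]
            + reverse([u[0]] + reverse(reverse(u[1:])[1:])))
```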
Exercises
1. Using the function member[x,u] defined in Chapter I and
which may also be written x ε u, write function definitions for the
union u ∪ v of lists u and v, the intersection u ∩ v, and the
set difference u-v. What is wanted should be clear from the
examples:
(A B C) ∪ (B C D) = (A B C D),
(A B C) ∩ (B C D) = (B C),
and
(A B C) - (B C D) = (A).
Pay attention to getting correct the trivial cases in which some of
the arguments are NIL. In general, it is important to understand
clearly the trivial cases of functions.
2. Suppose x takes numbers as values and u takes as
values lists of numbers in ascending order, e.g. (2 4 7). Write a
function merge[x,u] whose value is obtained from that of u by
putting x in u in its proper place. Thus
merge[3,(2 4)] = (2 3 4), and merge[3,(2 3)] = (2 3 3).
3. Write functions giving the union, intersection, and set
difference of ordered lists; the result is wanted as an ordered list.
Note that computing these functions of unordered lists takes
a number of comparisons proportional to the square of the number of
elements of a typical list, while for ordered lists, the number of
comparisons is proportional to the number of elements.
4. Using merge, write a function sort that transforms an
unordered list into an ordered list.
5. Write a function goodsort that sorts a list using a
number of comparisons proportional to n log n, where n is the
length of the list to be sorted.
3. Simple S-expression recursion.
In another class of problems, the value of the function is
immediate for atomic symbols, and for non-atoms depends only on the
values for the a-part and the d-part of the argument. Thus subst
was defined by
subst[x,y,z] ← if at z then [if z eq y then x else z]
else subst[x,y,a z].subst[x,y,d z].
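A sketch of subst in Python, representing non-atomic S-expressions as 2-tuples (a-part, d-part) and atoms as Python strings (a representation of our own choosing):

```python
def atom(z):
    # at z: anything that is not a pair counts as an atom here
    return not isinstance(z, tuple)

def subst(x, y, z):
    # subst[x,y,z] <- if at z then [if z eq y then x else z]
    #                 else subst[x,y,a z] . subst[x,y,d z]
    if atom(z):
        return x if z == y else z
    return (subst(x, y, z[0]), subst(x, y, z[1]))
```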
Two other examples are equal which gives the equality of
S-expressions and flat which spreads an S-expression into a list
of atoms.  They are defined by
x=y ← x eq y ∨ [¬at x ∧ ¬at y ∧ a x = a y ∧ d x = d y],
and
flat[x] ← flata[x,NIL]
where
flata[x,u] ← if at x then x.u else flata[a x,flata[d x,u]].
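Both definitions sketched in Python, with 2-tuples for non-atomic S-expressions and Python lists for the list of atoms (our representation, not the text's):

```python
def atom(x):
    return not isinstance(x, tuple)

def equal(x, y):
    # x=y <- x eq y or [not at x and not at y and a x = a y and d x = d y]
    if atom(x) or atom(y):
        return x == y
    return equal(x[0], y[0]) and equal(x[1], y[1])

def flata(x, u):
    # flata[x,u] <- if at x then x.u else flata[a x, flata[d x, u]]
    if atom(x):
        return [x] + u
    return flata(x[0], flata(x[1], u))

def flat(x):
    # flat[x] <- flata[x, NIL]
    return flata(x, [])
```

Note how flata threads the extra argument u through the recursion so that the atoms come out in left-to-right order.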
Exercises
1. Write a predicate to tell whether a given atom occurs in a
given S-expression, e.g. occur[B,((A.B).C)] = T.
2. Write a function to tell how many times a given atom
occurs in an S-expression.
3. Write a function to make a list without duplications of
the atoms occurring in an S-expression.
4. Write a function to make a list of all atoms that occur
more than once in a given S-expression paired with their
multiplicities.
5. Write a predicate to tell whether an S-expression has more
than one occurrence of a given S-expression as a sub-expression.
4. Other structural recursions.
When lists are used to represent algebraic expressions,
functions of these algebraic expressions often have a
recursive form closely related to the inductive definition of the
expressions. Suppose, for example, that sums and products are represented
respectively by the forms (PLUS X Y) and (TIMES X Y) and that the
values of variables are given on an a-list. We can write a recursive
formula for the value of an expression as follows:
value[e,a] ← if at e then d assoc[e,a]
else if a e eq PLUS then value[ad e,a] + value[add e,a]
else if a e eq TIMES then value[ad e,a] * value[add e,a].
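A sketch of this value in Python, representing expressions as Python lists like ['PLUS', 'X', 'Y'] and letting a Python dict stand in for the a-list (both choices are ours, for illustration only):

```python
def value(e, a):
    # value[e,a] <- if at e then d assoc[e,a]
    #   else if a e eq PLUS then value[ad e,a] + value[add e,a]
    #   else if a e eq TIMES then value[ad e,a] * value[add e,a]
    if not isinstance(e, list):          # at e: a variable
        return a[e]                      # d assoc[e,a]
    if e[0] == 'PLUS':
        return value(e[1], a) + value(e[2], a)
    if e[0] == 'TIMES':
        return value(e[1], a) * value(e[2], a)
```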
On the other hand, suppose that sums and products are not restricted
to have just two arguments; then we must use auxiliary functions
to define the value of an expression, as follows:
value[e,a] ← if at e then d assoc[e,a]
else if a e eq PLUS then vplus[d e,a]
else if a e eq TIMES then vtimes[d e,a].
where
vplus[u,a] ← if n u then 0 else value[a u,a] + vplus[d u,a],
and
vtimes[u,a] ← if n u then 1 else value[a u,a] * vtimes[d u,a].
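The n-ary version with its auxiliary functions, sketched in Python under the same assumed representation (expressions as Python lists, the a-list as a dict):

```python
def value(e, a):
    # value[e,a] <- if at e then d assoc[e,a]
    #   else if a e eq PLUS then vplus[d e,a]
    #   else if a e eq TIMES then vtimes[d e,a]
    if not isinstance(e, list):
        return a[e]
    if e[0] == 'PLUS':
        return vplus(e[1:], a)
    if e[0] == 'TIMES':
        return vtimes(e[1:], a)

def vplus(u, a):
    # vplus[u,a] <- if n u then 0 else value[a u,a] + vplus[d u,a]
    if not u:
        return 0
    return value(u[0], a) + vplus(u[1:], a)

def vtimes(u, a):
    # vtimes[u,a] <- if n u then 1 else value[a u,a] * vtimes[d u,a]
    if not u:
        return 1
    return value(u[0], a) * vtimes(u[1:], a)
```

The trivial cases give the identity elements 0 and 1 of the respective operations, so that a sum or product of no arguments comes out right.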
In both cases, the recursion form is related to the structure of the
algebraic expressions more closely than to the structure of
S-expressions or lists.
5. Tree search.